.require "memo.pub[let,jmc]" source;
.double space
.cb THE LITTLE THOUGHTS OF THINKING MACHINES

	When we interact with computers and other machines, we often use
language ordinarily used for talking about people.  We may say of a
vending machine, "It wants another ten cents for a lousy candy bar."
We may say of an automatic teller machine, "It thinks I don't have
enough money in my account because it doesn't yet know about the
deposit I made this morning." This article is about when we're right
or almost right in saying these things, and when it's a good idea to
think of machines that way.

	For more than a century we have used machines in our daily lives whose
detailed functioning most of us don't understand.  Few of us know much
about how the electric light system or the telephone system works
internally.  We do know their external behavior; we know that lights are
turned on and off by switches and how to dial telephone numbers.  We
may not know much about the internal combustion engine, but we know
that a car needs more gas when the gauge reads near EMPTY.

	In the next century we'll be increasingly faced with much more complex
computer-based systems.  It won't be necessary for many people to
know very much about how they work internally, but what we will have
to know about them in order to use them is more complex than what we
need to know about electric lights and telephones.  As our daily lives
involve ever more sophisticated computers, we will find that ascribing
little thoughts to machines will be increasingly useful in understanding
how to get the most good out of them.

	A lot of what we'll have to know concerns the information stored
in computers, which is why we find ourselves using psychological
words like "knows", "thinks", and "wants" in referring to machines,
even though machines are very different from humans and these words
arose from the human need to talk about other humans.

	According to some authorities, to use these words, the language of
the mind, to talk about machines is to commit the error of
anthropomorphism.  Anthropomorphism is often an error, all right, but it
is going to be increasingly difficult to understand machines without
using mental terms.

	Ever since Descartes, philosophically minded people have wrestled
with the question of whether it is possible for machines to think.  As
we interact more and more with computers - both personal computers and
others - the questions of whether machines can think and what kind of
thoughts they can have become ever more pertinent.  We can ask whether
machines remember, believe, know, intend, like or dislike, want,
understand, promise, owe, have rights or duties, or deserve rewards
or punishment.  Is this an all-or-nothing question, or can we say that
some machines do some of these things and not others, or that they do
them to some extent?

	My answer, based on some 30 years of work in the field of artificial
intelligence, is that machines have some intellectual qualities to
differing extents.  Even some very simple machines can be usefully regarded
as having some intellectual qualities.  Machines can and will be
given more and more intellectual qualities; not even human intelligence
is a limit.  However, artificial intelligence is a difficult branch of
science and engineering, and, judging by present slow progress, it
might take a long time.  The science of genetics took 100 years between
Mendel's experiments with peas and cracking the DNA code for proteins,
and it isn't done yet.

	Present machines have almost no emotional qualities, and, in my
opinion, it would be a bad idea to give them any.  We have enough
trouble figuring out our duties to our fellow humans and to animals
without creating a bunch of robots with qualities that would allow
anyone to feel sorry for them or would allow them to feel sorry for
themselves.

	Since I advocate some anthropomorphism, I'd better explain what I
consider good or bad anthropomorphism.  Anthropomorphism is the 
ascription of human characteristics to things not human.  When is it
a good idea to do this? When it says something that cannot as conveniently
be said some other way.

	Don't get me wrong.  The kind of anthropomorphism where someone
says, "This terminal hates me!" and bashes it, is just as silly as
ever.  It is also common to ascribe personalities to cars, boats,
and other machinery.  It is hard to say whether anyone takes this
seriously.  Anyway, I'm not supporting any of these things.  

	The reason for ascribing mental qualities and mental processes
to machines is the same as for ascribing them to other people.  It
helps us understand what they will do, how our actions will affect
them, and how to compare them with ourselves.

	Researchers in artificial intelligence (AI) are interested in the
use of mental terms to describe machines for two reasons.  First,
we'd like to provide machines with theories of knowledge and belief
so they can reason about what their users know, don't know, and
want.  Second, what the user knows about the machine can often best
be expressed using mental terms.

	Suppose I'm using an automatic teller machine at my bank.  I may
make statements about it like, "It won't give me any cash because it
knows there's no money in my account," or, "It knows who I am because
I gave it my secret number." It is not true that the teller
machine has the thought, "There's no money in his account," and so
refuses to give me cash.  But it was designed to act as if it does,
and if I want to figure out how to make it give me cash in the future,
I should treat it as if it knows that sort of thing.  "If it walks
like a duck and quacks like a duck, then it's a duck."

	It's difficult to be rigorous about whether a machine really
"knows", "thinks", etc., because we're hard put to define these
things.  We understand human mental processes only slightly better
than a fish understands swimming.

	Consider the following example of the instructions that came with
an electric blanket.  "Place the control near the bed in a place that
is neither hotter nor colder than the room itself.  If the control
is placed on a radiator or radiant heated floors, it will "think"
the entire room is hot and will lower your blanket temperature, making
your bed too cold.  If the control is placed on the window sill
in a cold draft, it will "think" the entire room is cold and will
heat up your bed so it will be too hot."

	I suppose some philosophers, psychologists, and English teachers
would maintain that the blanket manufacturer is guilty of anthropomorphism
and some would claim that great harm can come from thus ascribing
to machines qualities that only humans can have.  I argue
that saying the blanket thermostat "thinks" is OK; they could
even have left off the quotes.  Moreover, this helps us understand
how the thermostat works.  The example is extreme, because most
people don't need the word "think" to understand how a thermostatic
control works.  Nevertheless, the blanket manufacturer was probably
right in thinking that it would help some users.

	Keep in mind that the thermostat can properly be considered
to have just three possible thoughts or beliefs.  It may believe that
the room is too hot, or that it is too cold, or that it is ok.  It
has no other beliefs; for example, it does not believe that it is
a thermostat.
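
	For readers who program, the point can be put in a few lines of
code.  Here is a minimal sketch in Python; the 68 degree set point
and the 2 degree tolerance are invented for illustration and are not
taken from the blanket manual.

from enum import Enum

class Belief(Enum):
    # The only three "thoughts" the blanket control can have.
    ROOM_TOO_COLD = "the room is too cold"
    ROOM_OK = "the room is ok"
    ROOM_TOO_HOT = "the room is too hot"

def control_belief(sensed_temp_f, set_point_f=68.0, tolerance_f=2.0):
    # The control believes whatever its sensor reports, which is why
    # putting it on a radiator or a cold window sill misleads it.
    if sensed_temp_f < set_point_f - tolerance_f:
        return Belief.ROOM_TOO_COLD
    if sensed_temp_f > set_point_f + tolerance_f:
        return Belief.ROOM_TOO_HOT
    return Belief.ROOM_OK

print(control_belief(85.0))   # on a radiator: ROOM_TOO_HOT, so the blanket is turned down
print(control_belief(55.0))   # in a cold draft: ROOM_TOO_COLD, so the bed gets too hot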

	The example of the thermostat is a very simple one.  If we had only
thermostats to think about, we wouldn't bother with the concept of
belief at all.  And if all we wanted to think about were zero and one,
we wouldn't bother with the concept of number.

	The automatic teller is a slightly more complicated example.  It
has beliefs like, "There's enough money in the account," and "I don't
give out that much money".  We can imagine a slightly fancier automatic
teller that handles loans, loan payments, traveler's checks,
and so forth, with beliefs like, "The payment wasn't made on time,"
or, "This person is a good credit risk."

	The next example is adapted from the University of California
philosopher John Searle.  A person
who doesn't know Chinese memorizes a book of rules for manipulating Chinese
characters.  The rules tell him how to extract certain parts of a sequence
of characters, how to re-arrange them, and how finally to send back
another sequence of characters.  These rules say nothing about the meaning
of the characters, just how to compute with them.
He is repeatedly given Chinese sentences, to which he applies the rules, and
he gives back what turn out, because of the clever rules, to be Chinese sentences
that are appropriate replies.
We suppose that the rules result in a
Chinese conversation so intelligent that the person giving and receiving
the sentences can't tell him from an intelligent Chinese speaker.
This is analogous to a computer, which only obeys its
programming language, but can be programmed such that one can communicate
with it in a different programming language, or in English.
Searle says that since the person in the example doesn't understand
Chinese - even though he can produce intelligent Chinese conversation
by following rules - a computer cannot be said to "understand" things.
He makes no distinction, however, between the hardware (the person)
and the process (the set of rules).  I would argue that the set of
rules understands Chinese, and, analogously, a computer program may
be said to understand things, even if the computer does not.
Both Searle and I are ignoring any practical difficulties.

	Daniel Dennett, a Tufts University philosopher, has proposed three
attitudes aimed at understanding a system with which one interacts.

	The first he calls the physical stance.  In this we look at the
system in terms of its physical structure at various levels of organization.
Its parts have their properties and they interact in ways
that we know about.  In principle the analysis can go down to the atoms
and their parts.  Looking at a thermostat from this point of view,
we'd want to understand the working of the bimetal strip that most
thermostats use.  For the automatic teller, we'd want to know about
integrated circuitry, for one thing.  (Let's hope no one's in line
behind us while we do this.)

	The second is called the design stance.  In this we analyze something
in terms of the purpose for which it is designed.  Dennett's example of
this is the alarm clock.  We can usually figure out what an alarm
clock will do, e.g., when it will go off, without knowing whether it
is made of springs and gears or of integrated circuits.  The user of
an alarm clock typically doesn't know or care much about its internal
structure, and this information wouldn't be of much use.  Notice that
when an alarm clock breaks, its repair requires taking the physical
stance.  The design stance can usefully be applied to a thermostat -
it shouldn't be too hard to figure out how to set it, no matter how
it works.  With the automatic teller, things are a little less clear.

	The design stance is appropriate not only for machinery but also for
the parts of an organism.  It is amusing that we can't attribute a purpose
to the existence of ants, but we can find a purpose for the
glands in an ant that emit a chemical substance for other ants to follow.

	The third is called the intentional stance, and this is what we'll
often need for understanding computer programs.  In this we try to
understand the behavior of a system by ascribing to it beliefs, goals,
intentions, likes and dislikes, and other mental qualities.  In this
stance we ask ourselves what the thermostat thinks is going on, what
the automatic teller wants from us before it'll give us cash.  We say
things like, "The store's billing computer wants me to pay up, so it
intends to frighten me by sending me threatening letters."  The intentional
stance is most useful when it is the only way of expressing what we know
about a system.
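
	As a sketch of how little machinery may lie behind such a
description, here is a toy Python version of the billing computer.
The account fields and the 30 and 60 day thresholds are invented;
only the reading in terms of wants and intentions comes from the
discussion above.

def billing_computer_attitude(amount_owed, days_overdue):
    # Describe the program's behavior in the mental terms of the
    # intentional stance rather than in terms of its code.
    if amount_owed <= 0:
        return "it wants nothing from me"
    if days_overdue > 60:
        return ("it wants me to pay up, so it intends to frighten me "
                "by sending threatening letters")
    if days_overdue > 30:
        return "it wants me to pay up, so it intends to keep reminding me"
    return "it believes I will pay on time"

print(billing_computer_attitude(amount_owed=120.00, days_overdue=75))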

	(For variety Dennett mentions the astrological stance.  In this the
way to think about the future of a human is to pay attention to the
configuration of the stars when he was born.  To determine whether an
enterprise will succeed we determine whether the signs are favorable.
This stance is clearly distinct from the others - and worthless.)

	The mental qualities of present machines are not the same as ours.
While we will probably be able, in the future, to make machines with
mental qualities more like our own, we'll probably never want to deal with
machines that are too much like us.  Who wants to deal with a computer that
loses its temper, or an automatic teller that falls in love? Computers
will end up with the psychology that is convenient to their designers
(and they'll be fascist bastards if those designers don't think twice).
Program designers have a tendency to think of the users as idiots who need
to be controlled, rather than thinking of their program as a servant whose
master, the user, should be able to control it.  If designers and
programmers think about the apparent mental qualities of their
programs, they'll create programs that are easier and pleasanter to deal
with.